Revision – neuro2 finals citations 02

Greg Detre

Sunday, 09 June, 2002

 

Misc

Carpenter, "Neurophysiology", 1996

Auditory

Sound localisation

Pickles (1988), "An introduction to the physiology of hearing"

Kaas, Hackett and Tramo (1999), "Auditory processing in primate cerebral cortex", in Current Opinion in Neurobiology

Helmholtz resonance theory – the cross striations resonate with different frequencies of sound (like piano strings of different length/stiffness)

von Békésy tested this and found that each sound does not lead to the resonance of only one narrow segment of the basilar membrane, but initiates a travelling wave along the length of the cochlea that starts at the oval window (like snapping a rope tied at one end to a post)

the volley principle (Wever) – several fibres phase-locking (preferential firing at a particular point of the sound wave) on different cycles of a high-frequency stimulus might work in concert to signal frequencies higher than any single fibre can follow
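
A minimal simulation of the volley idea (all parameters invented for illustration): no single fibre follows a 3 kHz tone cycle-by-cycle, but a pool of fibres phase-locking on scattered cycles preserves the stimulus period in its combined output.

```python
import numpy as np

fs = 100_000                      # sampling rate (Hz)
f_tone = 3000                     # stimulus frequency, above any single fibre's max rate
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal
phase_peaks = np.sin(2 * np.pi * f_tone * t) > 0.99   # samples near each cycle's peak

rng = np.random.default_rng(0)
n_fibres = 20
p_fire = 0.1                      # each fibre fires on ~10% of cycles (refractory limit)
spikes = phase_peaks & (rng.random((n_fibres, t.size)) < p_fire)

# No single fibre fires anywhere near 3000 Hz...
print("max single-fibre rate (Hz):", spikes.sum(axis=1).max() / 0.01)
# ...but the pooled volley preserves the 3 kHz periodicity:
pooled = spikes.any(axis=0)
intervals = np.diff(t[pooled])
print("median pooled interval (ms):", np.median(intervals) * 1000)  # ~1/3 ms, i.e. 3 kHz
```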

2 main cell types of the ventral cochlear nucleus (Oertel). When depolarised by a steady current pulse:

stellate cell – generates a spaced series of action potentials at regular intervals, called a chopper response

bushy cell – generates only one or two spikes at the beginning of the pulse, signalling the onset and timing (important for localisation)

organisation of the primate auditory cortex (Brugge)

Kaas & Hackett, 1999, "What" and "where" processing in auditory cortex

Romanski et al. examined the connectivity of higher auditory cortical areas in macaque monkeys, using a combination of anatomical tracer dyes and electrophysiological recordings. Their results support the ventral/dorsal, temporal/parietal, what/where processing dichotomy, with the two streams projecting to functionally distinct regions of the frontal lobe

Moore & King, 1999, Auditory perception: the near and far of sound localisation

Most experiments on auditory localisation have been concerned with the horizontal and vertical positions of sound sources, ignoring the third dimension – distance. There was little work on this after von Békésy until Bronkhorst and Houtgast's demonstration, using virtual sound technology, that the perception of sound distance in an enclosed room by human listeners "can be quite simply modelled by fitting a temporal window around the ratio of direct-to-reverberant sound energy". Graziano et al. have shown that neurons in the frontal cortex of monkeys respond preferentially to sounds presented at particular near distances, within a hand's grasp of the monkey's head.
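
A rough sketch of the kind of model Bronkhorst and Houtgast describe (the window length and the toy impulse responses are my own assumptions): split a room impulse response into direct and reverberant energy with a short temporal window, and the ratio falls as source distance grows.

```python
import numpy as np

def direct_to_reverberant_db(impulse_response, fs, window_ms=2.5):
    """Energy ratio of the first `window_ms` (direct path) to the rest (reverb)."""
    onset = np.argmax(np.abs(impulse_response) > 0)   # arrival of the direct sound
    cut = onset + int(fs * window_ms / 1000)
    direct = np.sum(impulse_response[onset:cut] ** 2)
    reverb = np.sum(impulse_response[cut:] ** 2)
    return 10 * np.log10(direct / reverb)

# Toy impulse responses: direct peak scales with 1/r, reverberant tail roughly constant
fs = 44100
rng = np.random.default_rng(1)
tail = rng.normal(0, 0.01, fs // 2) * np.exp(-np.linspace(0, 8, fs // 2))
for r in (1.0, 2.0, 4.0):
    ir = np.concatenate(([1.0 / r], tail))
    print(f"distance {r} m: D/R = {direct_to_reverberant_db(ir, fs):.1f} dB")
```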

There is less evidence for distance tuning in the auditory system of non-echolocating animals. Graziano et al. recorded from the monkey's ventral premotor cortex which, like the superior colliculus, is a multisensory area involved in the sensory guidance of movement. They showed that the auditory receptive fields of ventral premotor cortex neurons, like their visual counterparts, rarely extend beyond 30cm from the head (and are therefore restricted to a region of space within the monkey's reach).

"There is, however, a fundamental difference in the way that auditory distance was examined in the two studies. The virtual space stimuli used by Bronkhurst and Houtgast simulated source distances of a metre or more - the far field, and refers to the region of space within which both monaural and binaural cue values are essentially independent of distance. In contrast, the distances in the Graziano et al. study were within the near field, a more complex region within which energy circulates without propagating. Monaural spectral cues and interaural level differences associated with near-field sound sources therefore vary with distance, providing a possible basis for distance discrimination by both individual neurons and human listeners. This is obviously useful for localising nearby sounds, but doesn't establish whether auditory neurons in non-echolocating mammals are sensitive to the other cues available for more distant sound sources."

Payne had shown that owls make use of their superb scotopic vision to pounce on mice in very dim light. However, they are still able to catch the mouse even if the lights are turned off as the owl is leaving its perch, as long as the mouse makes some sort of noise. This ability is impaired if the owl's ears are plugged.

"Eric Knudsen and Mark Konishi took this behavior into the lab and developed a technique to assess the animal's ability to localize sound. They trained an owl to sit on a perch in a dark, sound proof, anechoic room and attend to a sound produced by a speaker that could be positioned anywhere about the owl. When an owl localizes a sound, it turns its head, toward the direction of the sound. The owl's localization of sound was monitored by recording its head position with a special device -- a simple detector mounted on the head recorded induced electric currents produced by a pair of electric coils mounted around the animal's head. The speaker was mounted on a track and its position was controlled by a computer."

"Takahashi, Moiseff and Konishi [in 1984] used a local anesthetic to show that binaural intensity differences giving rise to space-specific cells in the MLD require the normal activity of the nucleus Magnocellularis. The processing of OTD requires the nucleus Angularis. Manley, Koppl, Carr and Konishi [in 1988]) showed that IID from L/R Nucleus Angularis is processed in the VLVp region of the Nucleus of the Lateralis Lemniscus while OTD from L/R Nucleus Magnocellularis is processed in the Nucleus Laminaris"

Konishi & Knudsen's (1977) study of owls' sound localisation abilities

Most experiments on auditory localisation have been concerned with the horizontal and vertical positions of sound sources, ignoring the third dimension – distance. There was little work on this after von Békésy until Bronkhorst and Houtgast's study using virtual sound technology (the cues provided by the head, the ears and the room are measured, digitally synthesised and mixed with the acoustic characteristics of the presenting headphones, making the stimuli indistinguishable from real sounds presented within rooms by distant loudspeakers), and Graziano et al.'s work with frontal cortex neurons in monkeys. Bronkhorst and Houtgast simulated source distances of a metre or more, while the Graziano et al. study focused on the near field. They showed that near-field distance processing can rely on monaural spectral cues and interaural level differences, but it is not yet clear whether auditory neurons in non-echolocating mammals are sensitive to the other cues available for more distant sound sources.

Moore and King consider the idea that perception (in all three major sensory systems) divides neatly into what and where processing pathways. This thesis is probably clearest and most defensible for the visual system, where support comes from lesion studies in monkeys, in which impairment of spatial abilities or object identification can be separated, and anatomical studies which show that connections in the visual pathways are fairly strongly segregated. Of course, the auditory system too must analyse both identity and location of stimuli, but it is not clear to what extent these are functionally and anatomically isolated.

 

Central auditory pathways

Ears

Somatosensory

Johnson (2001), The roles and functions of cutaneous mechanoreceptors

Killackey (1995), The formation of a cortical somatotopic map (in rodents)

Further evidence that the formation of a vibrissae-related pattern is extrinsic to the neocortex is the demonstration that the visual cortex of the neonatal rat, when transplanted to the region of the somatosensory cortex, is capable of supporting ingrowth of thalamocortical afferents and the expression of a vibrissae-related pattern (Schlaggar & O'Leary, 1994).

Zhang et al. (2001), Functional characteristics of the parallel SI- and SII-projecting neurons of the thalamic ventral posterior nucleus in the marmoset – abstract only

Huffman & Krubitzer (2001), Thalamo-cortical connections of areas 3a and M1 in marmoset monkeys

Kaas & Collins (2001), The organisation of sensory cortex

Biermann et al. (1998), Interaction of finger representation in the human first somatosensory cortex

Proske et al. (2000), The role of muscle receptors in the detection of movements

Melzack and Wall (1962): explanation for the antagonism between larger cutaneous afferents and smaller pain fibres


Touch receptors

Somatosensory plasticity

Spinal pathways

Somatosensory system

Pain

Vision

Snellen chart in opticians

how distinct are they anatomically - Young (Nature, 1992, p. 155)

Livingstone + Hubel were wrong - inputs are not purely p- and m- - it's all mixed up

Hendry & Calkins, 1998, Neuronal chemistry and functional organization in the primate visual system

The gains of the vestibulo-ocular and optokinetic reflexes adapt, so we can adapt over a few days to reversing prisms and glasses, which change how head movement affects retinal image movement. This requires the cerebellar flocculus – Miles et al. found that floccular Purkinje cells in the monkey respond to the visual signal that arises from the mismatch of head velocity and eye velocity.

To transform these "tiny, distorted, upside-down images" (Gregory, 1966) into the three-dimensional mental constructs we see, we have to build this visual representation from "unconscious inferences" (Rock, 1984) and ambiguous data.

Zeki first showed that the visual system seems to operate using three parallel pathways, which can be roughly characterised as analysing:

what (parvo-cellular inter-blob) – object recognition

where (magno-cellular) – position and motion in a three-dimensional world

colour (parvo-cellular blob) – allowing us to distinguish equiluminant stimuli

the visual system should be able to compare the previous location of an object with its current location by extracting the necessary information from the retina. This is complicated by the fact that information about the direction of motion from a small receptive field can be ambiguous. For example, the aperture problem (Movshon, 1990) demonstrates that if a grating of diagonal lines is moved downwards, sideways or perpendicular to the gratings, it will always appear to move in the same right-downwards direction – to disambiguate, information from two separate local areas needs to be combined (see the sketch below)
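
In vector terms, a detector looking through a small aperture only recovers the component of velocity normal to the grating's bars; a tiny sketch of this (velocities invented for illustration):

```python
import numpy as np

def perceived_through_aperture(true_velocity, grating_orientation_deg):
    """Only the velocity component normal to the grating's bars is visible locally."""
    theta = np.radians(grating_orientation_deg)
    normal = np.array([np.sin(theta), -np.cos(theta)])  # unit normal to the bars
    return np.dot(true_velocity, normal) * normal

# Three different true motions of a 45-degree grating...
for v in ([0, -1], [1, 0], [0.5, -0.5]):
    print(v, "->", perceived_through_aperture(np.array(v, float), 45))
# ...all produce the identical local (right-downwards) motion: the aperture problem.
```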

The difficulties involved increase with more complex objects and surfaces moving in three dimensions. Problems like the aperture problem highlight the need for a more complex solution, prompting researchers like Marr and Movshon to propose that information about motion in the visual field is extracted in two stages. The first stage is concerned with one-dimensional moving objects and measuring the motion of the components of complex objects. The second stage involves higher-order neurons combining and integrating the components of motion analysed by several of the initial stage neurons.
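
A sketch of the second stage as an intersection-of-constraints computation (my own formulation of the standard textbook solution, not taken from Marr or Movshon directly): each first-stage unit constrains the true velocity to a line in velocity space, and two non-parallel constraints suffice to pin it down.

```python
import numpy as np

def intersect_constraints(n1, c1, n2, c2):
    """Solve v.n1 = c1 and v.n2 = c2 for the true 2-D velocity v."""
    A = np.array([n1, n2], dtype=float)
    return np.linalg.solve(A, np.array([c1, c2], dtype=float))

# Two local units see the same pattern through differently oriented gratings:
true_v = np.array([1.0, -0.5])
n1 = np.array([np.sin(np.radians(45)), -np.cos(np.radians(45))])
n2 = np.array([np.sin(np.radians(120)), -np.cos(np.radians(120))])
c1, c2 = true_v @ n1, true_v @ n2             # each unit's 1-D measurement
print(intersect_constraints(n1, c1, n2, c2))  # recovers [1.0, -0.5]
```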

Lashley – cast doubt on simple-minded views of localisation

the effect on rats in a maze depended on the quantity of cortex removed, not where from

Retina etc.

LGN + V1

Superior colliculus, pretectum

Motion

Depth + stereopsis

Kandel & Schwartz, 2000

De Angelis & Cumming (2001), "The physiology of vision"

Garnham, "Artificial intelligence"

Mayhew and Frisby (1981), "Psychophysical and computational studies towards a theory of human stereopsis", Artificial Intelligence, 17: 349-85


Form

Colour

Parietal

human and monkey studies: the posterior parietal cortex has a role in programming actions and in transforming sensory signals into plans for motor behaviours (Mountcastle et al., 1975; Andersen et al., 1992)

different areas of the posterior parietal cortex = functionally different (Sakata et al., 1997):

lateral intraparietal area (LIP) – saccades

parietal reach region (PRR) – planning reaching movements

anterior intraparietal area (AIP) – grasping

Sakata (1997), The parietal association cortex in depth perception and visual control of hand action

Iwaniak & Whishaw (2000), On the origin of skilled forelimb movements

Goodale (1998), Frames of Reference for Perception and Action in the Human Visual System

The parietal cortex plays a vital role in the visuomotor stream (following Goodale & Haffenden's (1998) terminology) used for directing action, often at a subconscious level. This seems to involve the representation of, and transformation between, various coordinate frames of reference, in what Andersen et al. (1997) term a "multimodal representation of space". More controversially, the parietal cortex also seems to contain neuronal activity relating to attention, intention and decision.

 

Matelli et al. (1994): AIP and the premotor areas are connected reciprocally

Sakata et al (1997): many AIP neurons are selective for object shape and size

mirror neurons – active both when the monkey performs and observes an action – Rizzolatti, Fogassi and Gallese – basic system of action recognition, and creating an internal model

 

Andersen (1997), Multimodal representation of space in the posterior parietal cortex and its use in planning movements

Andersen et al (1992) speculate that LIP is the 'parietal eye field', 'specialised for visual-motor transformation functions related to saccades', on the basis of the strong direct projections from extrastriate visual areas and projections to various cortical and subcortical areas concerned with saccadic eye movements, and results from electrical stimulation.

Andersen claims that areas 7a and LIP use their eye position and retinal input signals to represent the location of a visual target with respect to the head, a 'head-centred reference frame'. He concedes that 'intuitively one would imagine that an area representing space in a head-centred reference frame would have receptive fields that are anchored in space with respect to the head', but proposes that instead a highly distributed pattern is used to uniquely specify each head-centred location in the activity across a population of cells with different eye position and retinal position sensitivities. Indeed, he argues that 'when neural networks are trained to transform retinal signals into head-centred coordinates by using eye position signals, the middle-layer units that make the transformation develop gain fields similar to the cells in the parietal cortex (Zipser & Andersen, 1988)'.
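
A minimal sketch in the spirit of the Zipser & Andersen result (a toy 1-D version; the network size, training scheme and all hyperparameters are my own choices): a small network trained to add retinal and eye position signals develops hidden units whose responses to a fixed retinal stimulus are modulated by eye position, i.e. gain fields.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: head-centred target = retinal position + eye position (1-D toy)
retina = rng.uniform(-1, 1, (2000, 1))
eye = rng.uniform(-1, 1, (2000, 1))
X = np.hstack([retina, eye])
y = retina + eye                        # head-centred location

# One hidden layer, trained by plain batch gradient descent on squared error
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2
    err = pred - y
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1
print("final MSE:", float((err ** 2).mean()))

# "Gain field": a hidden unit's response to the same retinal stimulus is
# scaled by eye position, as reported for parietal neurons.
for e in (-0.5, 0.0, 0.5):
    probe = np.array([[0.3, e]])        # fixed retinal location, varying gaze
    print(f"eye={e:+.1f}: hidden unit 0 response = {np.tanh(probe @ W1 + b1)[0, 0]:.3f}")
```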

Snyder et al (1993) showed that supplying solely vestibular signals (in the dark to isolate from visual input), or solely proprioceptive cues (rotating the trunk while keeping the head fixed), both elicit responses from these cells in 7a and LIP, indicating that both of these sources are used in constructing the body-centred frame. Furthermore, input from the vestibular and visual systems (e.g. landmarks and optic flow) can contribute to a world-centred representation.

Mazzoni et al (1996) recently demonstrated that when a monkey is required to memorize the location of an auditory target in the dark and then to make a saccade to it after a delay, there is activity in LIP during the presentation of the auditory target and during the delay period. This auditory response generally had the same directional preference as the visual response, suggesting that the auditory and visual receptive fields and memory fields may overlap one another. The above experiments were done when the animal was fixating straight ahead, with its head also oriented in the same direction. Under these conditions, the eye and head coordinate frames overlap. However, if the animal changes the orbital position of its eyes, then the two coordinate frames move apart. Do the auditory and visual receptive fields in LIP move apart when the eyes move, or do they share a common spatial coordinate frame?

Stricanne et al. (1996) showed that almost half of the auditory-responding cells in LIP coded the auditory location in eye-centred coordinates, as in the superior colliculus, where auditory fields are also eye-centred (Jay & Sparks, 1984).
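
The re-coding Stricanne et al. describe amounts to expressing the head-centred sound direction relative to the current direction of gaze; trivially:

```python
import numpy as np

def eye_centred(head_centred_sound, eye_position):
    """Auditory target re-coded relative to the current direction of gaze."""
    return np.asarray(head_centred_sound) - np.asarray(eye_position)

# A sound fixed at 20 deg azimuth relative to the head: as the eyes move, its
# eye-centred location (and hence an eye-centred receptive field) shifts too.
for gaze in (-10.0, 0.0, 10.0):
    print(f"gaze {gaze:+5.1f} deg -> eye-centred azimuth {eye_centred(20.0, gaze):+5.1f} deg")
```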

Handily, MSTd contains cells selective for one or more of the following: expansion-contraction, rotation and linear motion (Saito, 1986). However, it appears that MSTd is not decomposing the optic flow into channels of expansion, rotation and linear motion - Andersen produced a spiral space with expansion on one axis and rotation on another, and found that disappointingly few of the MSTd neurons had tuning curves aligned directly along these axes

Perrone & Stone's (1994) and Warren's (1995) similar models require more neurons for separate heading maps for different combinations of eye direction and speed (rather than just eye movement)(???).

In Xing et al's (1995) model, which takes in head-centred auditory signals and eye position and retinal position signals as input, and whose output codes the metrics of a planned movement in motor coordinates, the middle layers develop overlapping receptive fields for auditory and visual stimuli and eye position gain fields. It is interesting that the visual signals also develop gain fields, since both the retinally based stimuli and the motor error signals are always aligned when training the network and, in principle, do not need to use eye position information. However, the auditory and visual signals share the same circuitry and distributed representation, which results in gain fields for the visual signals.

No coordinate transformation is necessary for a simple visual saccade. However, there are times when the oculomotor coordinates are in a different frame from the sensory-retinal coordinates (e.g. displacement of the eye by electrical stimulation or an intervening saccade), yet the cells in the PPC, frontal eye fields and superior colliculus are still able to code the impending movement vector, even though no visual stimulus has appeared in their receptive fields. Krommenhoek et al.'s (1993) and Xing et al.'s (1995) networks were able to replicate this result, both developing eye gain fields in the hidden layer. The Xing et al. neural network was trained on a double-saccade task: it took two retinal locations as input and output the motor vectors of two eye movements, first to one target and then to the other. In order to program the second saccade accurately, the network was required to use the remembered retinal location of the first target and update it with the new eye position. This implies that an implicit distributed representation of head-centred location was formed in the hidden layer.
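
The updating required in the double-saccade task can be written in one line: the second motor vector is the second target's remembered retinal location minus the displacement produced by the first saccade. A sketch with illustrative numbers:

```python
import numpy as np

# Retinal locations of the two flashed targets (both seen before any movement)
target1 = np.array([10.0, 0.0])    # degrees
target2 = np.array([5.0, 8.0])

saccade1 = target1                 # first movement: straight to target 1
# Target 2's retinal location is now stale; update it with the new eye position:
saccade2 = target2 - saccade1      # correct second motor vector
print("second saccade vector:", saccade2)   # [-5, 8], not the stale [5, 8]
```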

Gnadt & Andersen (1988) have shown that activity in cells primarily in LIP (coding in oculomotor coordinates) precedes saccades. This activity is also memory-related, e.g. firing while a monkey remembers the location of a briefly flashed stimulus before making a delayed saccade to the remembered location. Glimcher & Platt required an animal to attend to a distractor target, which was extinguished as a cue to saccade to the selected target, thus separating the focus of attention from the selected movement. For many of the cells, the activity reflected the movement plan and not the attended location, although the activity of some cells was influenced by the attended location. Andersen thinks that these and other studies suggest that a component of LIP activity is related to movements that the animal intends to make.

Mazzoni et al. (1996) used a delayed double-saccade experiment to try to distinguish whether the memory activity was primarily related to intentions to make eye movements or to a sensory memory of the location of the target. They found both types of cells, with the majority of overall activity being related to the next intended saccade and not to the remembered stimulus location. This activity did not necessarily lead to execution of the movement: the animals could be asked to change their planned eye movements during the delay period in a memory saccade task, and the intended movement activity in LIP would change correspondingly (Bracewell et al., 1996).

Bushnell et al (1981) recorded from PPC neurons while the animal programmed an eye or reaching movement to a retinotopically identical stimulus. They claimed that the activity of the cells did not differentiate between these two types of movements, indicating that the PPC is concerned with sensory location and attention and not with planning movements. However, when Andersen et al repeated the experiment, they found that 2/3 of cells in the PPC were selective during the memory period for whether the target requires an arm or eye movement.

Andersen cites Duhamel et al. (1992; similar to Gnadt & Andersen, 1988) and Kalaska & Crammond (1995) as studies whose results could be explained by his theory that the memory-related activity in the PPC signals the animal's plan to make a movement.

Husain & Jackson (2001), Visual space is not what it appears to be

Probe stimuli that appear very briefly at a wide range of stimulus locations immediately prior to the execution of a saccadic eye movement are not perceived to be in their veridical positions, but are instead reported to be at locations compressed towards the target of the saccadic eye movement - the intended new point of fixation or direction of gaze. If no saccadic eye movement is planned, then the location is misreported closer to the point of fixation (Ross & Morrone, 1997).
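
The compression can be caricatured as pulling each probe's reported location some fraction of the way toward the saccade target (the compression factor here is invented for illustration):

```python
import numpy as np

def reported_position(probe, saccade_target, compression=0.5):
    """Peri-saccadic mislocalisation: reports shift toward the saccade goal."""
    probe = np.asarray(probe, float)
    saccade_target = np.asarray(saccade_target, float)
    return saccade_target + compression * (probe - saccade_target)

target = np.array([10.0, 0.0])
for p in ([-5.0, 0.0], [5.0, 0.0], [20.0, 0.0]):
    print(p, "-> reported", reported_position(p, target))
# Probes on both sides of the target are reported closer to it.
```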

When the intraparietal sulcus is lesioned in humans, there is a profound spatial deficit, which Husain & Jackson explain as an impairment in spatial re-mapping across saccades: when tested on a double-saccade task [16,17], patients are unable to '[make] saccades commensurate with the original retinal position of the second target', i.e. they fail to take into account the new eye position after the first saccade. Husain & Jackson suggest that this may account for one component of the hemispatial neglect syndrome which follows parietal damage.

Stein (1992) – 2 characteristics of all posterior parietal neurons:

1. combinations of sensory, motivational and motor information are received

2. response is greatest when the animal attends to, or moves towards, a target

we might then expect posterior parietal neurons to be transforming sensory information into commands for directing attention and guiding motor outputs

spatial deficits – perhaps due to damage to temporal-parietal polysensory regions (Goodale & Milner, 1993), rather than to the dorsal stream's role in visuomotor guidance

right hemisphere lesions (greater polysensory growth in the right hemisphere) → greater deficits on complex spatial tasks

Goodale & Haffenden (1998): It appears as though DF's visual system is no longer able to deliver perceptual information about the size, shape and orientation of objects, yet the visuomotor systems that control the programming and execution of visually-guided actions remain sensitive to these same object features.

Goodale and Milner (1992) have proposed that these two streams correlate with the 'dorsal' and 'ventral' streams identified in the cerebral cortex of the monkey

"When the eyes move, the focus tuning curve of these cells shifts in order to compensate for the retinal focus shift due to the eye movement. In this way MSTd could map out the relationship between the expansion focus and heading with relatively few neurons, each adjusting its focus preference according to the velocity of the eye." This pursuit compensation is achieved by a non-uniform gain and distortion applied to different locations in the receptive field. Andersen acknowledges that Perrone & Stone's (1994) and Warren's (1995) models are similar, but require more neurons for separate heading maps for different combinations of eye direction and speed (rather than just eye movement). Andersen goes so far as to say that MSTd may compensate spatially for the consequences of eye movements for all patterns of motion.
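
A geometric sketch of what such compensation buys (random scene, small-angle flow, all parameters invented): translational flow radiates from the heading point, pursuit adds a roughly uniform component that shifts the apparent focus of expansion, and subtracting an eye-velocity signal recovers the true heading.

```python
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(-1, 1, (200, 2))         # image positions of scene points
inv_depth = rng.uniform(0.5, 2.0, (200, 1))   # nearer points move faster

heading = np.array([0.2, 0.0])                # true focus of expansion (FOE)
flow_trans = (points - heading) * inv_depth   # expansion about the heading
eye_velocity = np.array([0.3, 0.0])           # horizontal pursuit (approximated
flow = flow_trans + eye_velocity              # as a uniform flow component)

def focus_of_expansion(pts, fl):
    # Least-squares point from which the flow field radiates: for each point,
    # require the flow vector to be parallel to (point - FOE).
    n = np.stack([-fl[:, 1], fl[:, 0]], axis=1)   # normals to each flow vector
    b = np.sum(n * pts, axis=1)
    return np.linalg.lstsq(n, b, rcond=None)[0]

print("apparent FOE during pursuit:", focus_of_expansion(points, flow))
print("FOE after subtracting eye velocity:",
      focus_of_expansion(points, flow - eye_velocity))   # ~[0.2, 0.0]
```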